
    Biomarker - vom Sein und Wesen

    Biomarker: a term that has by now become almost inflationary in its use! While in past decades the term "marker" was associated above all with tumor diseases and their clinical-chemical diagnostics, the "-omics" wave of the last decade has washed a flood of new "markers" for anything and everything into the medical literature. In particular, the uncritical handling of these markers, and the inexperience of those whom new techniques enabled to measure them in formerly purely natural-science disciplines, have led not only to great uncertainty about the value of biomarkers as such but also to great disappointment in the "-omics" disciplines, which flared up like straw fires. Hardly any of the often excellently published biomarkers has found its way into the clinic, highly cited scientists have been shown to have committed the most elementary errors in (pre-)analytics and interpretation, and even the "classical" tumor markers, hitherto regarded as established, have in many cases fallen into disrepute. Unjustly so! With markers old and new, laboratory medicine holds excellent tools that, correctly applied, not only have the potential to revolutionize laboratory diagnostics but will also change the face of the discipline: away from an in vitro quantifying auxiliary discipline and toward an integrative and interpretative science.

    Computational Evidence for Laboratory Diagnostic Pathways: Extracting Predictive Analytes for Myocardial Ischemia from Routine Hospital Data

    Background: Laboratory parameters are critical parts of many diagnostic pathways, mortality scores, patient follow-ups, and overall patient care, and should therefore rest on standardized, evidence-based recommendations. Currently, laboratory parameters and their significance are treated differently depending on expert opinion, clinical environment, and varying hospital guidelines. In our study, we aimed to demonstrate the capability of a set of algorithms to identify predictive analytes for a specific diagnosis. As an illustration of the proposed methodology, we examined the analytes associated with myocardial ischemia; it is a well-researched diagnosis and therefore provides a substrate for comparison. We intend to present a toolset that will boost the evolution of evidence-based laboratory diagnostics and thereby improve patient care. Methods: The data consisted of preexisting, anonymized recordings from the emergency ward covering all patient cases with a measured value for troponin T. We used multiple imputation, orthogonal data augmentation, and Bayesian model averaging to create predictive models for myocardial ischemia, each incorporating different analytes as cofactors. By examining these models further, we could then infer the predictive importance of each analyte in question. Results: The algorithms extracted troponin T as a highly predictive analyte for myocardial ischemia. As this is a known relationship, we regard the predictive importance of troponin T as a proof of concept, suggesting a functioning method. Additionally, we could demonstrate the algorithm's capability to extract known risk factors of myocardial ischemia from the data. Conclusion: In this pilot study, we chose an assembly of algorithms to analyze the value of analytes in predicting myocardial ischemia. By providing reliable correlations between the analytes and the diagnosis of myocardial ischemia, we demonstrated the possibility of creating unbiased, computationally derived guidelines for laboratory diagnostics in today's era of digitalization.
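
    The general idea can be sketched compactly: impute missing analytes, fit small candidate models, and average them by approximate posterior weight. The sketch below is a minimal, hypothetical illustration, not the study's actual implementation; it uses BIC-weighted model averaging over small analyte subsets as a stand-in for full Bayesian model averaging with orthogonal data augmentation, and assumes a pandas DataFrame `df` with analyte columns and a binary `ischemia` label.

        import itertools
        import numpy as np
        import pandas as pd
        from sklearn.experimental import enable_iterative_imputer  # noqa: F401
        from sklearn.impute import IterativeImputer
        from sklearn.linear_model import LogisticRegression

        def rank_analytes(df: pd.DataFrame, label: str = "ischemia", max_size: int = 2):
            """Rank analyte columns of `df` by approximate posterior inclusion probability."""
            cols = [c for c in df.columns if c != label]
            # Stand-in for the multiple imputation step: model-based imputation.
            X = IterativeImputer(random_state=0).fit_transform(df[cols])
            y = df[label].to_numpy()
            bics, subsets = [], []
            # Enumerating subsets is tractable only for a modest analyte panel.
            for k in range(1, max_size + 1):
                for subset in itertools.combinations(range(len(cols)), k):
                    m = LogisticRegression(max_iter=1000).fit(X[:, list(subset)], y)
                    p = m.predict_proba(X[:, list(subset)])[:, 1]
                    ll = np.sum(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
                    bics.append((len(subset) + 1) * np.log(len(y)) - 2 * ll)  # +1: intercept
                    subsets.append(subset)
            w = np.exp(-0.5 * (np.array(bics) - np.min(bics)))
            w /= w.sum()  # approximate posterior model weights
            pip = dict.fromkeys(cols, 0.0)
            for weight, subset in zip(w, subsets):
                for j in subset:
                    pip[cols[j]] += weight  # inclusion probability of each analyte
            return sorted(pip.items(), key=lambda kv: -kv[1])

    On data like those described, troponin T would be expected to surface at the top of such a ranking.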

    Sparse Proteomics Analysis - A compressed sensing-based approach for feature selection and classification of high-dimensional proteomics mass spectrometry data

    Background: High-throughput proteomics techniques, such as mass spectrometry (MS)-based approaches, produce very high-dimensional datasets. In a clinical setting, one is often interested in how mass spectra differ between patients of different classes, for example, spectra from healthy patients vs. spectra from patients having a particular disease. Machine learning algorithms are needed to (a) identify these discriminating features and (b) classify unknown spectra based on this feature set. Since the acquired data are usually noisy, the algorithms should be robust against noise and outliers, while the identified feature set should be as small as possible. Results: We present a new algorithm, Sparse Proteomics Analysis (SPA), based on the theory of compressed sensing, that allows us to identify a minimal discriminating set of features from mass spectrometry datasets. We show (1) how our method performs on artificial and real-world datasets, (2) that its performance is competitive with standard (and widely used) algorithms for analyzing proteomics data, and (3) that it is robust against random and systematic noise. We further demonstrate the applicability of our algorithm to two previously published clinical datasets.
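
    The sparsity idea at the heart of this approach can be illustrated with a generic baseline: an l1-penalized linear classifier drives most peak weights to exactly zero, leaving a small discriminating feature set. The sketch below is not the SPA algorithm itself, and the synthetic spectra with a single informative bin are assumptions for illustration.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.preprocessing import StandardScaler

        def sparse_feature_set(spectra, labels, C=0.1):
            """Return a fitted l1-penalized classifier and the surviving feature indices."""
            X = StandardScaler().fit_transform(spectra)
            clf = LogisticRegression(penalty="l1", solver="liblinear", C=C)
            clf.fit(X, labels)
            return clf, np.flatnonzero(clf.coef_[0])  # indices of nonzero m/z bins

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 5000))   # 5000 m/z bins, mostly noise
        y = rng.integers(0, 2, size=200)
        X[y == 1, 42] += 2.0               # one truly discriminating peak
        _, picked = sparse_feature_set(X, y)
        print(picked)                      # typically a small set containing bin 42

    Lowering C shrinks the selected set further, trading sensitivity for sparsity.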

    Big Data in Laboratory Medicine—FAIR Quality for AI?

    Laboratory medicine is a digital science. Every large hospital produces a wealth of data each day, from simple numerical results of, e.g., sodium measurements to the highly complex output of "-omics" analyses, as well as quality control results and metadata. Processing, connecting, storing, and ordering extensive parts of these individual data requires Big Data techniques. Whereas novel technologies such as artificial intelligence and machine learning have exciting applications in the augmentation of laboratory medicine, the Big Data concept remains fundamental for any sophisticated data analysis in large databases. To make laboratory medicine data optimally usable for clinical and research purposes, they need to be FAIR: findable, accessible, interoperable, and reusable. This can be achieved, for example, by automated recording, connection of devices, efficient ETL (Extract, Transform, Load) processes, careful data governance, and modern data security solutions. Enriched with clinical data, laboratory medicine data allow a gain in pathophysiological insights, can improve patient care, and can be used to develop reference intervals for diagnostic purposes. Nevertheless, Big Data in laboratory medicine does not come without challenges: taking care of the growing number of analyses, and of the data derived from them, is a demanding task. Laboratory medicine experts are and will be needed to drive this development, take an active role in the ongoing digitalization, and provide guidance for their clinical colleagues engaging with laboratory data in research.
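
    As a concrete, if simplified, picture of what a FAIR-oriented ETL step might look like, the sketch below normalizes a hypothetical raw laboratory export so that every result carries a persistent identifier, a standard analyte code, and explicit units. The file name, column names, and LOINC mapping are illustrative assumptions, not part of the article.

        import csv
        import uuid

        LOINC = {"sodium": "2951-2"}  # assumed mapping from local names to LOINC codes

        def etl(path):
            """Extract rows from a raw export and transform them into FAIR-ready records."""
            with open(path, newline="") as f:
                for row in csv.DictReader(f):
                    yield {
                        "id": str(uuid.uuid4()),             # findable: persistent identifier
                        "loinc": LOINC.get(row["analyte"]),  # interoperable: standard coding
                        "value": float(row["value"]),
                        "unit": row["unit"],                 # reusable: explicit units
                        "timestamp": row["timestamp"],       # reusable: provenance
                    }

        # The load step (writing records to a governed database) is omitted here:
        # for record in etl("lab_export.csv"):
        #     ...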

    Real-world Health Data and Precision for the Diagnosis of Acute Kidney Injury, Acute-on-Chronic Kidney Disease, and Chronic Kidney Disease: Observational Study.

    BACKGROUND The criteria for the diagnosis of kidney disease outlined in the Kidney Disease: Improving Global Outcomes (KDIGO) guidelines are based on a patient's current, historical, and baseline data. The diagnosis of acute kidney injury, chronic kidney disease, and acute-on-chronic kidney disease requires previous measurements of creatinine, back-calculation, and the interpretation of several laboratory values over a certain period. Diagnoses may be hindered by unclear definitions of the individual creatinine baseline and by rough ranges of normal values that are set without adjusting for age, ethnicity, comorbidities, and treatment. Correct diagnosis classification and sufficient staging improve coding, data quality, reimbursement, the choice of therapeutic approach, and a patient's outcome. OBJECTIVE In this study, we aim to apply a data-driven approach to assign diagnoses of acute, chronic, and acute-on-chronic kidney disease with the help of a complex rule engine. METHODS Real-time and retrospective data from the hospital's clinical data warehouse for inpatient and outpatient cases treated between 2014 and 2019 were used. Delta serum creatinine, baseline values, and admission and discharge data were analyzed. A KDIGO-based SQL algorithm applied specific diagnosis-based International Classification of Diseases (ICD) codes to inpatient stays. Text mining on discharge documentation was also conducted to measure the effects on diagnosis. RESULTS We show that this approach yielded an increased number of diagnoses (4491 cases of ICD-coded kidney disease and injury in 2014 vs 11,124 in 2019) and higher precision in documentation and coding. Among the generated kidney-disease codes, the share of unspecific ICD N19 diagnoses dropped from 19.71% (1544/7833) in 2016 to 4.38% (416/9501) in 2019, while the share of specific ICD N18 diagnoses rose from 50.1% (3924/7833) to 62.04% (5894/9501). CONCLUSIONS Our data-driven method supports the process and reliability of diagnosis and staging and improves the quality of documentation and data. Measuring patient outcomes will be the next step in this project.
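
    One building block of such a rule engine is the KDIGO creatinine criterion for acute kidney injury, which can be expressed in a few lines. The sketch below encodes only that rule (a rise of at least 0.3 mg/dL within 48 hours, or at least 1.5 times a prior value within 7 days); the study's full engine additionally handles baselines, back-calculation, staging, and ICD code assignment, and runs as SQL rather than Python.

        from datetime import timedelta

        def aki_by_creatinine(measurements):
            """measurements: list of (datetime, serum creatinine in mg/dL), time-sorted."""
            for i, (t_i, v_i) in enumerate(measurements):
                for t_j, v_j in measurements[i + 1:]:
                    # KDIGO: absolute rise >= 0.3 mg/dL within 48 hours ...
                    if t_j - t_i <= timedelta(hours=48) and v_j - v_i >= 0.3:
                        return True
                    # ... or >= 1.5x a prior (baseline) value within 7 days.
                    if t_j - t_i <= timedelta(days=7) and v_j >= 1.5 * v_i:
                        return True
            return False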

    The BioRef Infrastructure, a Framework for Real-Time, Federated, Privacy-Preserving, and Personalized Reference Intervals: Design, Development, and Application.

    BACKGROUND Reference intervals (RIs) for patient test results are in standard use across many medical disciplines, allowing physicians to identify measurements indicating potentially pathological states with relative ease. The process of inferring cohort-specific RIs is, however, often skipped because of the high costs and cumbersome effort associated with it. Sophisticated analysis tools are required to automatically infer relevant and locally specific RIs directly from routine laboratory data. Such tools would effectively connect clinical laboratory databases to physicians and provide personalized target ranges for the respective cohort population. OBJECTIVE This study aims to describe the BioRef infrastructure, a multicentric governance and IT framework for the estimation and assessment of patient-group-specific RIs from routine clinical laboratory data, using an innovative decentralized data-sharing approach and a sophisticated, clinically oriented graphical user interface for data analysis. METHODS A common governance agreement and interoperability standards have been established, allowing the harmonization of multidimensional laboratory measurements from multiple clinical databases into a unified "big data" resource. International coding systems, such as the International Classification of Diseases, Tenth Revision (ICD-10); unique identifiers for medical devices from the Global Unique Device Identification Database; type identifiers from the Global Medical Device Nomenclature; and a universal transfer logic, such as the Resource Description Framework (RDF), are used to align the routine laboratory data of each data provider for use within the BioRef framework. With a decentralized data-sharing approach, the BioRef data can be evaluated by end users from each cohort site following a strict "no copy, no move" principle, that is, only data aggregates for the intercohort analysis of target ranges are exchanged. RESULTS The TI4Health distributed and secure analytics system was used to implement the proposed federated and privacy-preserving approach and to comply with the restrictions applying to sensitive patient data. Under the BioRef interoperability consensus, clinical partners enable the computation of RIs via the TI4Health graphical user interface without exposing the underlying raw data. The interface was developed for use by physicians and clinical laboratory specialists and allows intuitive, interactive data stratification by patient factors (age, sex, and personal medical history) as well as laboratory analysis determinants (device, analyzer, and test kit identifier). This consolidated effort enables extremely detailed, patient-group-specific queries and thus the generation of individualized, covariate-adjusted RIs on the fly. CONCLUSIONS The BioRef-TI4Health infrastructure gives clinical physicians and researchers a framework for defining precise RIs immediately, in a convenient, privacy-preserving, and reproducible manner, promoting a vital part of practicing precision medicine while streamlining compliance and avoiding transfers of raw patient data. This new approach can provide a crucial update on RIs and improve patient care within personalized medicine.
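
    The "no copy, no move" principle can be made concrete with a toy example: each site reduces its raw results to a coarse histogram over shared bin edges, and only these aggregates are pooled to estimate a 2.5th to 97.5th percentile reference interval. The bin grid, analyte, and synthetic data below are assumptions for illustration; BioRef and TI4Health supply the governance, coding, and user interface around this core.

        import numpy as np

        BINS = np.linspace(120, 160, 81)  # shared bin edges, e.g., sodium in mmol/L

        def site_aggregate(values):
            counts, _ = np.histogram(values, bins=BINS)
            return counts  # the only thing that ever leaves the site

        def pooled_reference_interval(aggregates):
            total = np.sum(aggregates, axis=0)
            cdf = np.cumsum(total) / total.sum()
            centers = (BINS[:-1] + BINS[1:]) / 2
            return (centers[np.searchsorted(cdf, 0.025)],
                    centers[np.searchsorted(cdf, 0.975)])

        rng = np.random.default_rng(1)
        sites = [site_aggregate(rng.normal(140, 3, size=5000)) for _ in range(3)]
        print(pooled_reference_interval(sites))  # roughly (134, 146) mmol/L

    Stratified queries (by age, sex, device, and so on) amount to pooling only the aggregates of the matching patient subgroup.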

    Potential of Dried Blood Self-Sampling for Cyclosporine C2 Monitoring in Transplant Outpatients

    Background. Close therapeutic drug monitoring of cyclosporine (CsA) in transplant outpatients is a favorable procedure for keeping long-term blood drug levels within their respective narrow therapeutic ranges. Compared with basal levels (C0), CsA peak levels (C2) are more predictive of transplant rejection. However, the use of C2 levels is hampered by the need for precisely timed blood sampling and for qualified personnel. We therefore evaluated a new C2 self-obtained blood sampling procedure for transplant outpatients using dried capillary and venous blood samples and compared the CsA levels, stability, and clinical practicability of the different procedures. Methods. Fifty-five solid organ transplant recipients were instructed in the single-handed collection of 50 μL capillary blood samples and dried blood spots by finger prick using standard finger-prick devices. We used standardized EDTA-coated capillary blood collection systems and standardized filter paper WS 903. CsA was determined by LC-MS/MS. The patients and technicians also answered a questionnaire on the procedure and sample quality. Results. The C0 and C2 levels from capillary blood collection systems (C0 [ng/mL]: 114.5 ± 44.5; C2: 578.2 ± 222.2) and capillary dried blood (C0 [ng/mL]: 175.4 ± 137.7; C2: 743.1 ± 368.1) correlated significantly (P < .01) with the drug levels of the venous blood samples (C0 [ng/mL]: 97.8 ± 37.4; C2: 511.2 ± 201.5). The correlations at C0 were ρcap.-ven. = 0.749 and ρdried blood-ven. = 0.432; at C2, ρcap.-ven. = 0.861 and ρdried blood-ven. = 0.711. The patients preferred dried blood sampling because it was simpler and less painful. Additionally, the sample quality of self-obtained dried blood spots for LC-MS/MS analytics was superior to that of the respective capillary blood samples. Conclusions. C2 self-obtained dried blood sampling can easily be performed by transplant outpatients and is therefore suitable and cost-effective for close therapeutic drug monitoring.
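
    Assuming the reported ρ values denote rank correlations, the method comparison reduces to a standard computation on paired measurements. The values below are hypothetical stand-ins, not the study's data.

        import numpy as np
        from scipy.stats import spearmanr

        capillary = np.array([520.0, 610.0, 450.0, 700.0, 580.0])  # C2 levels, ng/mL
        venous = np.array([480.0, 560.0, 430.0, 640.0, 530.0])     # paired venous draws

        rho, p = spearmanr(capillary, venous)
        print(f"rho = {rho:.3f}, p = {p:.4f}")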

    Longitudinal Study of the Variation in Patient Turnover and Patient-to-Nurse Ratio: Descriptive Analysis of a Swiss University Hospital

    Variations in patient demand increase the challenge of balancing high-quality nursing skill mixes against budgetary constraints. Developing staffing guidelines that allow high-quality care at minimal cost requires first exploring the dynamic changes in nursing workload over the course of a day. Accordingly, this longitudinal study analyzed nursing care supply and demand in 30-minute increments over a period of 3 years. We assessed 5 care factors: patient count (care demand), nurse count (care supply), the patient-to-nurse ratio for each nurse group, extreme supply-demand mismatches, and patient turnover (ie, number of admissions, discharges, and transfers). Our retrospective analysis of data from the Inselspital University Hospital Bern, Switzerland, included all inpatients and the nurses working in their units from January 1, 2015, to December 31, 2017. Two data sources were used. The nurse staffing system (tacs) provided information about nurses and all the care they provided to patients, their working time, and admission, discharge, and transfer dates and times. The medical discharge data included patient demographics, further admission and discharge details, and diagnoses. These two data sources were linked through several identifiers. Our final dataset included more than 58 million data points for 128,484 patients and 4633 nurses across 70 units. Compared with patient turnover, fluctuations in the number of nurses were less pronounced; the differences mainly coincided with shifts (night, morning, evening). While the percentage of shifts with extreme staffing fluctuations ranged from less than 3% (mornings) to 30% (evenings and nights), the percentage within "normal" ranges varied from less than 50% to more than 80%. Patient turnover occurred throughout the measurement period but was lowest at night. Based on measurements of the patient-to-nurse ratio and patient turnover at 30-minute intervals, our findings indicate that the patient count, which varies considerably throughout the day, is the key driver of changes in the patient-to-nurse ratio. This demand-side variability challenges the supply-side mandate to provide safe and reliable care. Detecting and describing such patterns in variability is key to appropriate staffing planning. This descriptive analysis was a first step toward identifying time-related variables to be considered in a predictive nurse staffing model.
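
    The 30-minute binning behind this analysis is straightforward to reproduce: given per-interval head counts of patients and nurses on a unit, the ratio follows directly. The frame layout and the synthetic day below are assumptions; the study derives such counts by linking the tacs staffing system with medical discharge data.

        import pandas as pd

        def half_hour_ratio(patients, nurses):
            """Both frames: a DatetimeIndex plus a `count` column of heads present."""
            p = patients["count"].resample("30min").mean()
            n = nurses["count"].resample("30min").mean()
            return (p / n).rename("patient_to_nurse_ratio")

        idx = pd.date_range("2015-01-01", periods=48, freq="30min")
        pts = pd.DataFrame({"count": 20 + (idx.hour >= 8) * 5}, index=idx)  # daytime peak
        rns = pd.DataFrame({"count": [4] * 48}, index=idx)                  # flat staffing
        print(half_hour_ratio(pts, rns).head())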